    Covering Metric Spaces by Few Trees

    A tree cover of a metric space (X,d) is a collection of trees such that every pair x,y in X has a low-distortion path in at least one of the trees. If it has the stronger property that every point x in X has a single tree containing low-distortion paths to all other points, we call it a Ramsey tree cover. Tree covers and Ramsey tree covers have been studied by [Yair Bartal et al., 2005; Anupam Gupta et al., 2004; T-H. Hubert Chan et al., 2005; Gupta et al., 2006; Mendel and Naor, 2007], and have found several important algorithmic applications, e.g. routing and distance oracles. The union of the trees in a tree cover also serves as a special type of spanner that can be decomposed into a few trees, with each low-distortion path contained in a single tree; such spanners for Euclidean point sets were presented by [S. Arya et al., 1995]. In this paper we devise efficient algorithms to construct tree covers and Ramsey tree covers for general, planar and doubling metrics. We pay particular attention to the desirable case of distortion close to 1, and study what can be achieved when the number of trees is small. In particular, our work shows a large separation between what can be achieved by tree covers vs. Ramsey tree covers.
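    The tree-cover property described above can be checked directly on a toy example. The following sketch (the metric, the trees, and all names are hypothetical, chosen purely for illustration) covers the shortest-path metric of a 4-cycle with two spanning paths, so that every pair finds a distortion-1 path in at least one tree, while either tree alone has stretch 3 on its missing edge.

```python
import itertools

def tree_path_length(tree, dist, u, v):
    # tree: adjacency dict of an unrooted tree; edge weights come from dist.
    # DFS from u records parents, then we walk back from v summing edges.
    parent = {u: None}
    stack = [u]
    while stack:
        x = stack.pop()
        for y in tree[x]:
            if y not in parent:
                parent[y] = x
                stack.append(y)
    length, x = 0.0, v
    while parent[x] is not None:
        length += dist[x][parent[x]]
        x = parent[x]
    return length

def cover_distortion(trees, dist, points):
    # Worst, over all pairs, of the best tree's stretch for that pair.
    worst = 1.0
    for u, v in itertools.combinations(points, 2):
        best = min(tree_path_length(t, dist, u, v) for t in trees)
        worst = max(worst, best / dist[u][v])
    return worst

points = [0, 1, 2, 3]
# Shortest-path metric of the unit-weight 4-cycle 0-1-2-3-0.
dist = {0: {1: 1, 2: 2, 3: 1}, 1: {0: 1, 2: 1, 3: 2},
        2: {0: 2, 1: 1, 3: 1}, 3: {0: 1, 1: 2, 2: 1}}
path_a = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}  # cycle minus edge (0,3)
path_b = {1: [0], 0: [1, 3], 3: [0, 2], 2: [3]}  # cycle minus edge (1,2)
print(cover_distortion([path_a], dist, points))          # single tree: 3.0
print(cover_distortion([path_a, path_b], dist, points))  # two-tree cover: 1.0
```

The pair (0,3) is stretched to 3 in `path_a` but has a direct edge in `path_b`, and vice versa for (1,2), which is why two trees suffice for distortion 1 here.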

    Optimality of the Johnson-Lindenstrauss Dimensionality Reduction for Practical Measures

    It is well known that the Johnson-Lindenstrauss dimensionality reduction method is optimal for worst-case distortion. While in practice many other methods and heuristics are used, not much is known in terms of bounds on their performance. The question of whether the JL method is optimal for practical measures of distortion was recently raised by BFN19 (NeurIPS'19). They provided upper bounds on its quality for a wide range of practical measures and showed that these are indeed best possible in many cases. Yet some of the most important cases, including the fundamental case of average distortion, were left open. In particular, they show that the JL transform has $1+\epsilon$ average distortion for embedding into $k$-dimensional Euclidean space, where $k = O(1/\epsilon^2)$, and for more general $q$-norms of distortion, $k = O(\max\{1/\epsilon^2, q/\epsilon\})$, whereas tight lower bounds were established only for large values of $q$ via reduction to the worst case. In this paper we prove that these bounds are best possible for any dimensionality reduction method, for any $1 \leq q \leq O(\frac{\log (2\epsilon^2 n)}{\epsilon})$ and $\epsilon \geq \frac{1}{\sqrt{n}}$, where $n$ is the size of the subset of Euclidean space. Our results imply that the JL method is optimal for various distortion measures commonly used in practice, such as stress, energy and relative error. We prove that if any of these measures is bounded by $\epsilon$ then $k = \Omega(1/\epsilon^2)$ for any $\epsilon \geq \frac{1}{\sqrt{n}}$, matching the upper bounds of BFN19 and extending their tightness results to the full range of moment analysis. Our results may indicate that the JL dimensionality reduction method should be considered more often in practical applications, and the bounds we provide for its quality should serve as a benchmark when evaluating the performance of other methods and heuristics.
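    The average-distortion behavior discussed above is easy to observe empirically. The sketch below is not the paper's construction: it applies a plain Gaussian JL projection with $k$ on the order of $1/\epsilon^2$ (the constant 4 is an arbitrary illustrative choice) and measures the $q=1$ moment, i.e. the mean over pairs of $\max(r, 1/r)$ where $r$ is the ratio of embedded to original distance.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, eps = 200, 500, 0.25
k = int(np.ceil(4 / eps**2))  # k = O(1/eps^2); constant chosen for illustration

X = rng.standard_normal((n, d))
G = rng.standard_normal((d, k)) / np.sqrt(k)  # Gaussian JL map: x -> xG
Y = X @ G

def pairwise_dists(A):
    # Euclidean distance matrix via the Gram-matrix identity.
    sq = (A ** 2).sum(1)
    d2 = sq[:, None] + sq[None, :] - 2 * A @ A.T
    return np.sqrt(np.maximum(d2, 0.0))

iu = np.triu_indices(n, 1)            # each unordered pair once
orig = pairwise_dists(X)[iu]
emb = pairwise_dists(Y)[iu]
ratio = emb / orig
# Average distortion: mean per-pair stretch, counting expansion and
# contraction symmetrically.
avg_distortion = np.mean(np.maximum(ratio, 1.0 / ratio))
print(avg_distortion)
```

On random Gaussian data the measured average distortion comes out well below the worst-case guarantee of $1+\epsilon$, which is the gap between worst-case and practical measures that motivates the question studied in the paper.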

    Barriers for Faster Dimensionality Reduction

    The Johnson-Lindenstrauss transform allows one to embed a dataset of $n$ points in $\mathbb{R}^d$ into $\mathbb{R}^m$, while preserving the pairwise distance between any pair of points up to a factor $(1 \pm \varepsilon)$, provided that $m = \Omega(\varepsilon^{-2} \lg n)$. The transform has found an overwhelming number of algorithmic applications, allowing one to speed up algorithms and reduce memory consumption at the price of a small loss in accuracy. A central line of research on such transforms focuses on developing fast embedding algorithms, the classic example being the Fast JL transform of Ailon and Chazelle. All known such algorithms have an embedding time of $\Omega(d \lg d)$, but no lower bounds rule out a clean $O(d)$ embedding time. In this work, we establish the first non-trivial lower bounds (of magnitude $\Omega(m \lg m)$) for a large class of embedding algorithms, including in particular most known upper bounds.
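    To make the $\Omega(d \lg d)$ embedding time concrete, here is a minimal sketch of the Fast JL structure of Ailon and Chazelle: random signs $D$, a fast Walsh-Hadamard transform $H$ (the $\Theta(d \lg d)$ step that the lower bound above targets), then subsampling $m$ coordinates. The parameters and the plain coordinate-sampling step are illustrative simplifications, not the exact construction.

```python
import numpy as np

def fwht(x):
    # Iterative normalized Walsh-Hadamard transform: O(d lg d) operations,
    # orthonormal, so it preserves Euclidean norms exactly.
    x = x.copy()
    d = len(x)
    h = 1
    while h < d:
        for i in range(0, d, 2 * h):
            a = x[i:i + h].copy()
            b = x[i + h:i + 2 * h].copy()
            x[i:i + h] = a + b
            x[i + h:i + 2 * h] = a - b
        h *= 2
    return x / np.sqrt(d)

rng = np.random.default_rng(1)
d, m = 1024, 64                            # d must be a power of two for H
x = rng.standard_normal(d)
signs = rng.choice([-1.0, 1.0], d)         # D: random diagonal signs
coords = rng.choice(d, m, replace=False)   # sample m of the d coordinates
y = np.sqrt(d / m) * fwht(signs * x)[coords]
# The squared norm is preserved in expectation, so the ratio is near 1.
print(np.linalg.norm(y) / np.linalg.norm(x))
```

The sign flip and subsampling cost $O(d)$ and $O(m)$ respectively; the transform $H$ dominates, which is why all known fast constructions spend $\Omega(d \lg d)$ time.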